28 research outputs found

    Towards a neural-level cognitive architecture: modeling behavior in working memory tasks with neurons

    Full text link
    Constrained by results from classic behavioral experiments, we provide a neural-level cognitive architecture for modeling behavior in working memory tasks. We propose a canonical microcircuit that can be used as a building block for working memory, decision making and cognitive control. A controller operates gates to route the flow of information between the working memory and the evidence accumulator, and sets parameters of the circuits. We show that this type of cognitive architecture can account for results in behavioral experiments such as judgment of recency, probe recognition and delayed match-to-sample. In addition, the neural dynamics generated by the cognitive architecture provide a good match with neurophysiological data from rodents and monkeys. For instance, the architecture generates cells tuned to a particular amount of elapsed time (time cells), to a particular position in space (place cells) and to a particular amount of accumulated evidence.
    http://sites.bu.edu/tcn/files/2019/05/Cogsci2019_TiganjEtal.pdf
    Accepted manuscript
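As a rough illustration of the gating idea described above, here is a minimal sketch (not the paper's circuit; all names, parameters and noise values are illustrative assumptions) in which a controller's gate schedule decides when evidence is routed from working memory into a drift-diffusion accumulator:

```python
import numpy as np

def run_trial(stimulus_drift, gate_schedule, threshold=1.0, dt=0.01,
              noise_sd=0.1, seed=0):
    """Gated accumulator sketch: a controller opens a gate that routes
    evidence into a drift-diffusion accumulator; crossing a bound
    triggers a decision."""
    rng = np.random.default_rng(seed)
    x = 0.0  # accumulated evidence
    for t, gate_open in enumerate(gate_schedule):
        if gate_open:  # controller routes evidence to the accumulator
            x += stimulus_drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        if abs(x) >= threshold:
            return np.sign(x), t * dt  # decision and response time
    return 0, len(gate_schedule) * dt  # no decision within the trial
```

With the gate closed, no evidence accumulates and no decision is reached, which is the routing role the controller plays in the architecture.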

    Evidence accumulation in a Laplace domain decision space

    Full text link
    Evidence accumulation models of simple decision-making have long assumed that the brain estimates a scalar decision variable corresponding to the log-likelihood ratio of the two alternatives. Typical neural implementations of this algorithmic cognitive model assume that large numbers of neurons are each noisy exemplars of the scalar decision variable. Here we propose a neural implementation of the diffusion model in which many neurons construct and maintain the Laplace transform of the distance to each of the decision bounds. As in classic findings from brain regions including LIP, the firing rate of neurons coding for the Laplace transform of net accumulated evidence grows to a bound during random dot motion tasks. However, rather than noisy exemplars of a single mean value, this approach makes the novel prediction that firing rates grow to the bound exponentially; across neurons, there should be a distribution of different rates. A second set of neurons records an approximate inversion of the Laplace transform; these neurons directly estimate net accumulated evidence. In analogy to time cells and place cells observed in the hippocampus and other brain regions, the neurons in this second set have receptive fields along a "decision axis." This finding is consistent with recent findings from rodent recordings. This theoretical approach places simple evidence accumulation models in the same mathematical language as recent proposals for representing time and space in cognitive models for memory.
    Comment: Revised for CB
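A minimal numerical sketch of the first population described above, under the simplifying assumption that each unit with rate constant s encodes exp(-s·d(t)), the Laplace transform of a delta function at d(t), the remaining distance to the bound (the rate constants and bound here are illustrative, not the paper's values):

```python
import numpy as np

def laplace_accumulator(evidence, bound=1.0, s_values=(1.0, 2.0, 4.0, 8.0)):
    """Laplace-domain accumulator sketch: each unit with rate constant s
    fires at exp(-s * d(t)), where d(t) is the distance between the net
    accumulated evidence and the decision bound."""
    s = np.asarray(s_values)
    x = np.cumsum(evidence)              # net accumulated evidence
    d = np.clip(bound - x, 0.0, None)    # remaining distance to the bound
    return np.exp(-np.outer(d, s))       # firing rates, shape (T, n_units)
```

As evidence accumulates, every unit's rate grows toward its maximum, but units with larger s grow at steeper exponential rates, matching the prediction of a distribution of growth rates across neurons.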

    Is working memory stored along a logarithmic timeline? Converging evidence from neuroscience, behavior and models

    Full text link
    A growing body of evidence suggests that short-term memory stores not only the identity of recently experienced stimuli but also information about when they were presented. This representation of 'what' happened 'when' constitutes a neural timeline of the recent past. Behavioral results suggest that people can sequentially access memories for the recent past, as if they were stored along a timeline to which attention is sequentially directed. In the short-term judgment of recency (JOR) task, the time to choose between two probe items depends on the recency of the more recent probe but not on the recency of the more remote probe. This pattern of results suggests a backward self-terminating search model. We review recent neural evidence from the macaque lateral prefrontal cortex (lPFC) (Tiganj, Cromer, Roy, Miller, & Howard, in press) and behavioral evidence from the human JOR task (Singh & Howard, 2017) bearing on this question. Notably, both lines of evidence suggest that the timeline is logarithmically compressed, as predicted by Weber-Fechner scaling. Taken together, these findings provide an integrative perspective on the temporal organization and neural underpinnings of short-term memory.
    R01 EB022864 - NIBIB NIH HHS; R01 MH112169 - NIMH NIH HHS
    Accepted manuscript
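The backward self-terminating search can be sketched in a few lines. This toy model (the step cost, base time and compression rule are illustrative assumptions, not fitted parameters) makes the two qualitative predictions explicit: RT is independent of the more remote probe's lag and grows with the log-compressed lag of the more recent probe:

```python
import math

def jor_rt(lag_recent, lag_remote, step_cost=0.1, base=0.3):
    """Backward self-terminating scan over a log-compressed timeline:
    attention scans backward from the present in equal steps of
    log-recency and stops at the first probe it reaches, so predicted
    RT depends only on lag_recent, never on lag_remote."""
    assert 0 < lag_recent < lag_remote
    steps = math.ceil(math.log(1 + lag_recent))  # Weber-Fechner compression
    return base + step_cost * steps
```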

    An algebraic method for eye blink artifacts detection in single channel EEG recordings

    Get PDF
    Single-channel EEG systems are very useful in EEG-based applications where real-time processing, low computational complexity and low cumbersomeness are critical constraints. These include brain-computer interface and biofeedback devices, as well as some clinical applications such as EEG recording in babies or Alzheimer's disease recognition. In this paper we address the problem of eye blink artifact detection in such systems. We study an algebraic approach based on numerical differentiation, recently introduced from operational calculus. The occurrence of an artifact is modeled as an irregularity that appears explicitly, as a delay, in the (generalized) time derivative of the EEG signal. Manipulating such a delay is easy with the operational calculus, and it leads to a simple joint detection and localization algorithm. While the algorithm is devised from continuous-time arguments, the final implementation step is fully realized in a discrete-time context, using very classical discrete-time FIR filters. The proposed approach is compared with three other approaches: (1) the basic threshold approach, (2) an approach that combines a median filter, a matched filter and the nonlinear energy operator (NEO), and (3) a wavelet-based approach. The comparison is done on (a) artificially created signals in which the eye activity is synthesized from real EEG recordings and (b) real single-channel EEG recordings from 32 different brain locations. Results are presented with Receiver Operating Characteristic curves. They show that the proposed approach performs as well as or better than the other approaches, while having lower computational complexity and a simple real-time implementation.
Comparison of the results on artificially created and real signals leads to the conclusion that, with detection techniques based on derivative estimation, we are able to detect not only eye blink artifacts but any spike-shaped artifact, even one very low in amplitude.
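To make the derivative-based idea concrete, here is a minimal stand-in, not the algebraic operational-calculus method of the paper: a central-difference FIR filter approximates the derivative, and spike-shaped events are flagged where its magnitude exceeds a robust (MAD-based) threshold; the kernel and threshold are illustrative choices:

```python
import numpy as np

def detect_spikes(signal, threshold=4.0):
    """Derivative-based artifact detection sketch: an FIR approximation
    of the derivative makes abrupt, spike-shaped events stand out; they
    are flagged where |derivative| exceeds a robust threshold."""
    kernel = np.array([1.0, 0.0, -1.0]) / 2.0   # central-difference FIR filter
    deriv = np.convolve(signal, kernel, mode="same")
    mad = np.median(np.abs(deriv - np.median(deriv))) + 1e-12
    return np.flatnonzero(np.abs(deriv) > threshold * 1.4826 * mad)
```

Because the detector reacts to the derivative rather than the amplitude, it flags any sufficiently abrupt transient, which is the property the conclusion above emphasizes.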

    DeepSITH: efficient learning via decomposition of what and when across time scales

    Full text link
    Extracting temporal relationships over a range of scales is a hallmark of human perception and cognition -- and thus it is a critical feature of machine learning applied to real-world problems. Neural networks are either plagued by the exploding/vanishing gradient problem in recurrent neural networks (RNNs) or must adjust their parameters to learn the relevant time scales (e.g., in LSTMs). This paper introduces DeepSITH, a network comprising biologically-inspired Scale-Invariant Temporal History (SITH) modules in series with dense connections between layers. SITH modules respond to their inputs with a geometrically-spaced set of time constants, enabling the DeepSITH network to learn problems along a continuum of time scales. We compare DeepSITH to LSTMs and other recent RNNs on several time series prediction and decoding tasks. DeepSITH achieves state-of-the-art performance on these problems.
    https://papers.nips.cc/paper/2021/file/e7dfca01f394755c11f853602cb2608a-Paper.pdf
    Published version
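The core idea of a geometrically spaced set of time constants can be sketched with a bank of leaky integrators. This is a simplified stand-in for a SITH module (the real module uses a Laplace transform and its approximate inverse; the constants below are illustrative):

```python
import numpy as np

def sith_layer(inputs, tau_min=1.0, n_taus=8, ratio=2.0):
    """SITH-style layer sketch: the input is leaky-integrated at a
    geometric ladder of time constants, so downstream layers see the
    recent past at a continuum of time scales."""
    taus = tau_min * ratio ** np.arange(n_taus)    # geometrically spaced
    alphas = 1.0 / taus
    state = np.zeros(n_taus)
    history = []
    for x in inputs:
        state = state + alphas * (x - state)       # leaky integration per scale
        history.append(state.copy())
    return np.array(history)                       # shape (T, n_taus)
```

After an impulse, fast units forget quickly while slow units retain a trace, which is what lets a stack of such layers cover a continuum of time scales without tuning a single characteristic scale.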

    A temporal record of the past with a spectrum of time constants in the monkey entorhinal cortex

    Get PDF
    Episodic memory is believed to be intimately related to our experience of the passage of time. Indeed, neurons in the hippocampus and other brain regions critical to episodic memory code for the passage of time at a range of timescales. The origin of this temporal signal, however, remains unclear. Here, we examined temporal responses in the entorhinal cortex of macaque monkeys as they viewed complex images. Many neurons in the entorhinal cortex were responsive to image onset, showing large deviations from baseline firing shortly after image onset but relaxing back to baseline at different rates. This range of relaxation rates allowed the time since image onset to be decoded on the scale of seconds. Further, these neurons carried information about image content, suggesting that neurons in the entorhinal cortex carry information not only about when an event took place but also about the identity of that event. Taken together, these findings suggest that the primate entorhinal cortex uses a spectrum of time constants to construct a temporal record of the past in support of episodic memory.
    P51 OD010425 - NIH HHS; R01 MH117777 - NIMH NIH HHS; R01 MH093807 - NIMH NIH HHS; R01 MH112169 - NIMH NIH HHS; R01 EB022864 - NIBIB NIH HHS; R01 MH080007 - NIMH NIH HHS
    Published version
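The decoding logic can be illustrated with a toy template-matching decoder. Assuming, for the sketch, that each unit relaxes back to baseline as exp(-t/tau) with its own tau (the actual analysis in the paper may differ), elapsed time is recovered by matching the observed population rate vector against templates on a grid of candidate times:

```python
import numpy as np

def decode_time(rates, relax, t_grid):
    """Decode elapsed time from a population whose units relax to
    baseline as exp(-t / tau_i): pick the candidate time whose template
    vector is nearest to the observed rates (least squares)."""
    relax = np.asarray(relax)
    templates = np.exp(-np.outer(t_grid, 1.0 / relax))  # (T, n_units)
    errs = ((templates - rates) ** 2).sum(axis=1)
    return t_grid[np.argmin(errs)]
```

Because each tau flattens out at a different time, a spectrum of time constants keeps the population vector changing, and hence time decodable, across several orders of magnitude of elapsed time.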

    On the pertinence of a numerical transmission model for neural information

    No full text
    In this thesis we bring together advanced mathematical tools and signal processing methods to address three important problems in neuroscience: neural action potential (spike) detection, neural spike sorting, and the neural code itself. Starting from extracellular neural recordings, we first address the question of spike detection. The spike time occurrences appear, as irregularities, explicitly in the distributional derivatives of the neural signal. The problem is cast as a change-point detection problem. Using operational calculus, which provides a convenient framework for handling such distributional derivatives, we characterize the time occurrence of a spike by an explicit formula. After spike detection we address the spike sorting problem. We developed a simple algorithm for the case when multi-channel recordings are available. The algorithm uses an iterative application of ICA and a deflation technique in two nested loops. In each iteration of the external loop, the spiking activity of one neuron is singled out and then deflated from the recordings. The internal loop implements a sequence of ICA and spike detection steps for removing the noise and all the spikes that do not come from the targeted neuron. Finally, we discuss properties of the neural code. We investigate whether the nature of the neural code is discrete or continuous and, if it is discrete, whether the elements of the code are drawn from a finite alphabet. We particularly address a pulse-position coding scheme, making a link between communication theory and the neural code.
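The change-point view of spike detection can be illustrated numerically. The thesis obtains the spike time t0 in closed form from weighted integrals via operational calculus; the sketch below is a simple numerical stand-in that fits the same jump model y(t) = b + a·H(t - t0) by least squares over candidate change points:

```python
import numpy as np

def locate_step(y, dt):
    """Joint detection/localization sketch: model the window as a
    baseline b plus a jump of size a at t0, fit (b, a) by least squares
    for each candidate t0, and return the residual-minimizing t0."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y)) * dt
    best = (np.inf, 0.0)
    for i in range(1, len(y) - 1):
        X = np.column_stack([np.ones_like(t), (t >= t[i]).astype(float)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = ((y - X @ coef) ** 2).sum()
        if r < best[0]:
            best = (r, t[i])
    return best[1]
```

The closed-form algebraic solution avoids this grid search entirely, which is what makes the thesis's approach attractive for real-time implementations.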